Results 1 - 20 of 35
1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12593, 2023.
Article in English | Scopus | ID: covidwho-20237503

ABSTRACT

In recent years, the outbreak of the COVID-19 epidemic has posed a serious threat to the safety of people around the world and has driven the development of a series of online learning assessment technologies. Through the research and development of online learning platforms such as WeChat, Tencent Classroom, and NetEase Cloud Classroom, schools can carry out online learning assessment, which has also promoted the rapid development of online learning technology. Using 2D and 3D recognition technology, an online learning platform can recognize faces and changes in pose, and based on 2D and 3D image processing it can evaluate students' online learning by identifying their learning state and emotion. By granulating teaching evaluation, the platform can accurately evaluate and analyze the teaching process, realizing real-time assessment of students' learning status, including absence, multiple people in frame, distraction, and fatigue. With the relevant algorithms, the platform can also assess students' head posture and give real-time warnings of learning fatigue. This paper first analyzes the framework of online learning quality assessment, then analyzes face recognition and head pose recognition technology, and finally puts forward some suggestions. © 2023 SPIE.
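
A common way to implement the head-pose component described above is to fit a generic 3D face model to detected 2D landmarks with OpenCV's solvePnP; the sketch below assumes landmarks from an external detector and is illustrative rather than the authors' implementation.

```python
# Sketch: head-pose estimation from six 2D facial landmarks with cv2.solvePnP.
# Landmarks are assumed to come from any face-landmark detector; the 3D model
# points are a generic head model, not calibrated to a real user.
import numpy as np
import cv2

# Generic 3D reference points (nose tip, chin, eye corners, mouth corners), in mm.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

def head_pose(image_points: np.ndarray, frame_size: tuple):
    """image_points: (6, 2) float array ordered like MODEL_POINTS."""
    h, w = frame_size
    focal = w  # rough pinhole approximation: focal length ~ image width
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    return rvec, tvec  # large rvec deviations can trigger a distraction warning
```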

2.
Lecture Notes on Data Engineering and Communications Technologies ; 166:523-532, 2023.
Article in English | Scopus | ID: covidwho-20233251

ABSTRACT

Attendance marking in a classroom is a tedious and time-consuming task. With a large number of students present, there is always a possibility of proxy attendance. In recent times, automatic attendance marking has been extensively addressed via fingerprint-based biometric systems, radio frequency identification (RFID) tags, etc. However, RFID systems lack dependability, and due to COVID-19 the use of fingerprint-based systems is not advisable. Instead of these conventional methods, this paper presents an automated contactless attendance system that employs facial recognition to record student attendance and a gesture sensor to activate the camera only when needed, thereby consuming minimal power. The resulting data is stored in Google Spreadsheets, and the reports can be viewed on a webpage. Thus, this work intends to make the attendance marking process contactless, efficient, and simple. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
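
As an illustration of the logging step, the hypothetical snippet below appends a recognized student to a Google Sheet via the gspread library; the sheet name, credential file, and mark_attendance helper are assumptions, since the paper does not specify its storage code.

```python
# Sketch: logging a recognized student to Google Sheets, assuming the face
# match has already happened upstream. The sheet name and credential path
# are placeholders, not from the paper.
from datetime import datetime
import gspread

def mark_attendance(student_name: str) -> None:
    gc = gspread.service_account(filename="service-account.json")
    ws = gc.open("Attendance").sheet1
    # One row per sighting: the webpage report can aggregate these later.
    ws.append_row([student_name, datetime.now().isoformat(timespec="seconds")])
```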

3.
2023 9th International Conference on Advanced Computing and Communication Systems, ICACCS 2023 ; : 220-225, 2023.
Article in English | Scopus | ID: covidwho-20232798

ABSTRACT

The whole world has been confronting the gigantic enemy of COVID-19 since March 2020. With its rapid spread, it has devastated a major part of the world and has proved to be one of the most dangerous viruses of the 21st century. Countries went into lockdown to control its spread, and economies dropped to all-time lows. A major guideline for avoiding the spread of diseases like COVID-19 at work is avoiding contact with people and their belongings, and shared computing devices can spread the virus through touch. This paper presents an artificial intelligence-based virtual mouse that recognizes hand gestures to control various functions of a personal computer. The virtual mouse algorithm uses a webcam or the built-in camera of the system to capture hand gestures, then detects the palm boundaries in a manner similar to the face detection model of the MediaPipe Face Mesh algorithm. After tracing the palm boundaries, a regression model locates the 21 3D hand-knuckle coordinate points inside the recognized hand/palm region. Once the hand landmarks are detected, they are used to call Windows Application Programming Interface (API) functions to control the functionalities of the system. The proposed algorithm was tested for volume control and cursor control on a laptop running the Windows operating system with a webcam, and it took only 1 ms to identify the gestures and control the volume and cursor in real time. © 2023 IEEE.
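
For illustration, a minimal sketch of the described pipeline follows: MediaPipe Hands supplies the 21 landmarks, and the index fingertip (landmark 8) drives the cursor. pyautogui stands in for the Windows API calls the paper uses, so treat it as an approximation rather than the authors' code.

```python
# Sketch: moving the cursor from the index fingertip (landmark 8 of the 21
# MediaPipe hand landmarks). Volume control via the Windows API is omitted;
# pyautogui acts as a cross-platform input simulator instead.
import cv2
import mediapipe as mp
import pyautogui

hands = mp.solutions.hands.Hands(max_num_hands=1)
screen_w, screen_h = pyautogui.size()
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        tip = result.multi_hand_landmarks[0].landmark[8]  # index fingertip
        # Landmarks are normalized to [0, 1]; scale them to screen pixels.
        pyautogui.moveTo(tip.x * screen_w, tip.y * screen_h)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break
```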

4.
2nd International Conference for Innovation in Technology, INOCON 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2326348

ABSTRACT

In today's post-COVID culture, where many people work from home, there is a real possibility of serious long-term health problems. Many people have taken up exercising at home, and if done incorrectly, it can have major negative effects. Another main contributor to these health issues is bad sitting posture, which is only exacerbated by working for hours on end. Hand gesture detection also has many useful applications in elderly healthcare, automating actions, and gesture-based presentations and games. To help users, this paper proposes pinpointing the locations of posture errors to the user in real time and in a lightweight manner for yoga posture correction. The incorrect positions are shown in real time on top of the user's video feed to help them correct their pose. The user is told when they are sitting in a bad position, and the total bad-posture time for the session is also shown, giving the user the required information. To further help users, this paper augments the hand gesture detection feature with federated learning and personalization, avoiding the common pitfall of privacy concerns while still allowing users to customize their experience. These tasks are implemented with the MediaPipe library, one of the key components that makes the features lightweight and easy to use, running in real time with no lag while keeping resource requirements as low as possible. © 2023 IEEE.
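
As a rough sketch of the posture-checking idea (not the paper's actual rules), the snippet below estimates a neck angle from MediaPipe Pose landmarks and flags a slouch past a threshold; the 40-degree value is an illustrative guess.

```python
# Sketch: flagging a slouch by the angle between the ear-shoulder line and
# vertical, using MediaPipe Pose landmarks. The threshold is illustrative.
import math
import mediapipe as mp

mp_pose = mp.solutions.pose
LEFT_EAR = mp_pose.PoseLandmark.LEFT_EAR
LEFT_SHOULDER = mp_pose.PoseLandmark.LEFT_SHOULDER

def neck_angle(landmarks) -> float:
    ear, shoulder = landmarks[LEFT_EAR], landmarks[LEFT_SHOULDER]
    # Angle of the neck relative to vertical, in degrees.
    return math.degrees(math.atan2(abs(ear.x - shoulder.x),
                                   abs(shoulder.y - ear.y)))

def is_bad_posture(landmarks, threshold_deg: float = 40.0) -> bool:
    return neck_angle(landmarks) > threshold_deg
```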

5.
1st International Conference on Machine Intelligence and Computer Science Applications, ICMICSA 2022 ; 656 LNNS:119-128, 2023.
Article in English | Scopus | ID: covidwho-2294712

ABSTRACT

Hand gestures are among the communication tools that allow people to express their ideas and feelings. These gestures can be used to ensure communication not only between people but also to replace traditional devices in human-computer interaction (HCI), which leads us to apply this technology in the e-learning domain. The COVID-19 pandemic has attested to the importance of e-learning. However, practical activities (PA), an important part of the learning process, are absent from the majority of e-learning platforms. Therefore, this paper proposes a convolutional neural network (CNN) method for detecting hand gestures so that the user can control and manipulate virtual objects in the PA environment using a simple camera. To achieve this goal, two datasets were merged, and a skin model and background subtraction were applied to obtain well-prepared training and testing datasets for the CNN. Experimental evaluation shows an accuracy rate of 97.2%. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
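
The preprocessing the authors name, a skin model plus background subtraction, can be sketched with standard OpenCV primitives as below; the YCrCb thresholds are common literature values, not the paper's own.

```python
# Sketch of the described preprocessing: a YCrCb skin-color mask combined with
# MOG2 background subtraction to isolate the hand before CNN training.
import cv2
import numpy as np

SKIN_LO = np.array([0, 133, 77], dtype=np.uint8)    # common YCrCb skin bounds
SKIN_HI = np.array([255, 173, 127], dtype=np.uint8)
bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def hand_mask(frame_bgr):
    skin = cv2.inRange(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb),
                       SKIN_LO, SKIN_HI)
    motion = bg_subtractor.apply(frame_bgr)  # foreground (moving) pixels
    mask = cv2.bitwise_and(skin, motion)     # skin-colored AND moving
    return cv2.medianBlur(mask, 5)           # suppress speckle noise
```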

6.
J Imaging ; 9(4)2023 Apr 21.
Article in English | MEDLINE | ID: covidwho-2301024

ABSTRACT

The COVID-19 pandemic has underscored the need for real-time, collaborative virtual tools to support remote activities across various domains, including education and cultural heritage. Virtual walkthroughs provide a potent means of exploring, learning about, and interacting with historical sites worldwide. Nonetheless, creating realistic and user-friendly applications poses a significant challenge. This study investigates the potential of collaborative virtual walkthroughs as an educational tool for cultural heritage sites, with a focus on the Sassi of Matera, a UNESCO World Heritage Site in Italy. The virtual walkthrough application, developed using RealityCapture and Unreal Engine, leveraged photogrammetric reconstruction and deep learning-based hand gesture recognition to offer an immersive and accessible experience, allowing users to interact with the virtual environment using intuitive gestures. A test with 36 participants resulted in positive feedback regarding the application's effectiveness, intuitiveness, and user-friendliness. The findings suggest that virtual walkthroughs can provide precise representations of complex historical locations, promoting tangible and intangible aspects of heritage. Future work should focus on expanding the reconstructed site, enhancing the performance, and assessing the impact on learning outcomes. Overall, this study highlights the potential of virtual walkthrough applications as a valuable resource for architecture, cultural heritage, and environmental education.

7.
Sensors (Basel) ; 23(7)2023 Mar 24.
Article in English | MEDLINE | ID: covidwho-2300985

ABSTRACT

Automated hand gesture recognition is a key enabler of Human-to-Machine Interfaces (HMIs) and smart living. This paper reports the development and testing of a static hand gesture recognition system using capacitive sensing. Our system consists of a 6×18 array of capacitive sensors that captured five gestures (Palm, Fist, Middle, OK, and Index) from five participants to create a dataset of gesture images. The dataset was used to train Decision Tree, Naïve Bayes, Multi-Layer Perceptron (MLP) neural network, and Convolutional Neural Network (CNN) classifiers. Each classifier was trained five times, each time on four participants' gestures and tested on the remaining participant's gestures. The MLP classifier performed best, achieving an average accuracy of 96.87% and an average F1 score of 92.16%. This demonstrates that the proposed system can accurately recognize hand gestures and that capacitive sensing is a viable method for implementing a non-contact, static hand gesture recognition system.


Subject(s)
Gestures; Pattern Recognition, Automated; Humans; Bayes Theorem; Pattern Recognition, Automated/methods; Neural Networks, Computer; Machine Learning; Hand; Algorithms
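
The evaluation protocol described above is leave-one-participant-out cross-validation, which can be sketched with scikit-learn as follows; the data files and layer sizes are placeholders, not the authors' artifacts.

```python
# Sketch of the evaluation protocol: train on four participants, test on the
# fifth, rotating through all five (leave-one-participant-out). X is assumed
# to hold flattened 6x18 capacitance images; file names are hypothetical.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier

X = np.load("gesture_images.npy").reshape(-1, 6 * 18)
y = np.load("gesture_labels.npy")        # 5 gesture classes
groups = np.load("participant_ids.npy")  # which participant produced each row

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"per-participant accuracy: {scores}, mean: {scores.mean():.4f}")
```
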
8.
2023 International Conference on Electronics, Information, and Communication, ICEIC 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2283274

ABSTRACT

Recently, with the outbreak of the COVID-19 pandemic, various quarantine measures have been implemented to reduce the spread of the virus. As part of these efforts, a preference for touchless technology has been emerging. In this paper, we propose a touchless elevator control system using CNN-based hand gesture recognition. Experimental results show that the hand recognition AP and frame rate on the Jetson TX2 board are 81.87% and 11.8 FPS, respectively. We demonstrate that an elevator model can be controlled by virtual elevator buttons using CNN-based hand gesture recognition. The proposed method can be applied to commercial elevators to help prevent the spread of viruses via elevator buttons. © 2023 IEEE.

9.
Sensors (Basel) ; 23(4)2023 Feb 05.
Article in English | MEDLINE | ID: covidwho-2286238

ABSTRACT

With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off the spread of the virus. Contactless gesture recognition therefore becomes an effective means of reducing the risk of contact infection in outbreak prevention and control. However, recognizing the everyday behavioral sign language of deaf people presents a challenge to sensing technology. Ubiquitous acoustics offers new ideas for perceiving everyday behavior: a low sampling rate, slow propagation speed, and easily accessible equipment have led to the widespread use of acoustic gesture recognition. This paper therefore proposes UltrasonicGS, a contactless gesture and sign language behavior sensing method based on ultrasonic signals. The method uses Generative Adversarial Network (GAN)-based data augmentation to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to solve the problem of inconsistent lengths and difficult alignment between the input and output sequences of continuous gestures and sign language gestures, we added the Connectionist Temporal Classification (CTC) algorithm after the CRNN network. The architecture achieves better recognition of the sign language behaviors of certain groups, filling a gap in acoustic perception of Chinese sign language. We conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. The experimental results show that UltrasonicGS achieved a combined recognition rate of 98.8% for 15 single gestures and average correct recognition rates of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. As a result, our proposed method provides a low-cost and highly robust solution for avoiding human-to-human contact.


Subject(s)
COVID-19; Ultrasonics; Humans; Gestures; Sign Language; Acoustics
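
The CTC step the authors add after the CRNN can be sketched with PyTorch's built-in loss, as below; all dimensions are illustrative, and a random tensor stands in for real CRNN features.

```python
# Sketch: attaching CTC to a recurrent head so unsegmented gesture streams can
# be aligned to label sequences. Class 0 is reserved for the CTC blank.
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 16, 8  # time steps, batch, classes (incl. blank), target len
rnn_out = torch.randn(T, N, C, requires_grad=True)  # stand-in for CRNN features
log_probs = rnn_out.log_softmax(dim=2)

targets = torch.randint(1, C, (N, S), dtype=torch.long)  # labels 1..C-1
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients would flow back into the CRNN during training
```
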
10.
Adv Sci (Weinh) ; 10(6): e2205960, 2023 02.
Article in English | MEDLINE | ID: covidwho-2262047

ABSTRACT

Recent advances in flexible wearable devices have boosted the remarkable development of devices for human-machine interfaces, which are of great value to emerging cybernetics, robotics, and Metaverse systems. However, the effectiveness of existing approaches is limited by the quality of sensor data and by classification models with high computational costs. Here, a novel gesture recognition system with triboelectric smart wristbands and an adaptive accelerated learning (AAL) model is proposed. The sensor array is deployed according to the wrist anatomy and retrieves hand motions from a distance, exhibiting highly sensitive and high-quality sensing capabilities beyond existing methods. Importantly, the anatomical design leads to a close correspondence between the actions of the dominant muscle/tendon groups and gestures, and the resulting distinctive features in the signals of just seven sensors are very valuable for differentiating gestures. The AAL model realizes 97.56% identification accuracy in training 21 classes with only one-third of the operands of the original neural network. The applications of the system are further exploited in real-time somatosensory teleoperations with a low latency of <1 s, revealing new possibilities for endowing cyber-human interactions with disruptive innovation and immersive experience.


Subject(s)
Hand; Wearable Electronic Devices; Humans; Neural Networks, Computer; Gestures
11.
Intelligent Systems with Applications ; 17, 2023.
Article in English | Scopus | ID: covidwho-2238890

ABSTRACT

In April 2020, with the start of isolation around the world to counter the spread of COVID-19, an increase in violence against women and children was observed, to the point that it has been named the Shadow Pandemic. To fight this phenomenon, a Canadian foundation proposed the "Signal for Help" gesture so that people in danger can discreetly alert others. The gesture soon became known around the world, and even after COVID-19 isolation it has been used in public places by people in danger or being abused. However, the signal only works if people recognize it and know what it means. To address this challenge, we present a workflow for real-time detection of "Signal for Help" based on two lightweight CNN architectures, dedicated to hand palm detection and hand gesture classification, respectively. Moreover, due to the lack of a "Signal for Help" dataset, we created the first video dataset representing the "Signal for Help" hand gesture for detection and classification applications, comprising 200 videos. While the hand-detection task is based on a pre-trained network, the classification network is trained on the publicly available Jester dataset, which includes 27 classes, and fine-tuned with the "Signal for Help" dataset through transfer learning. The proposed platform shows an accuracy of 91.25% with a video processing capability of 16 fps when executed on a machine with an Intel i9-9900K@3.6 GHz CPU, 31.2 GB of memory, and an NVIDIA GeForce RTX 2080 Ti GPU, and it reaches 6 fps when running on the NVIDIA Jetson Nano developer kit as an embedded platform. The high performance and small model size of the proposed approach make it well suited to resource-limited devices and embedded applications, as confirmed by implementing the framework on the Jetson Nano Developer Kit. A comparison with state-of-the-art hand detection and classification models shows a negligible reduction in validation accuracy, around 3%, while the proposed model requires 4 times fewer resources and achieves an inference speedup of about 50% on the Jetson Nano platform, making it highly suitable for embedded systems. The developed platform as well as the created dataset are publicly available. © 2022
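
One plausible sketch of the fine-tuning step follows: freeze the pre-trained features and retrain only a new output head. The checkpoint name and the classifier attribute are assumptions, since the paper's exact architecture is not reproduced here.

```python
# Sketch of the transfer-learning step: a classifier pre-trained on the
# 27 Jester gesture classes gets a new output head and is fine-tuned on the
# small "Signal for Help" set. The checkpoint is hypothetical.
import torch
import torch.nn as nn

model = torch.load("jester_pretrained.pt", weights_only=False)
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained feature extractor

num_features = model.classifier.in_features    # assumes a Linear output head
model.classifier = nn.Linear(num_features, 2)  # "Signal for Help" vs. other

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ...a standard training loop over the fine-tuning videos follows...
```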

12.
2nd International Conference on Signal and Information Processing, IConSIP 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2235187

ABSTRACT

Kiosk machines have gained popularity among the general public as they are easy to operate and provide a good interactive interface. As a result, many users use a kiosk machine throughout the day to find the information they are looking for, interacting with it by touching its screen or using its buttons. Consequently, hundreds or even thousands of people may end up touching the surface of a kiosk machine in a single day. Hygiene cannot be maintained, as it is not possible to sanitize the kiosk machine after each use. This became a serious issue considering the effects the COVID-19 pandemic had on the world: multiple people touching the same surface is one of the most common ways through which the virus can spread. To help deal with this problem, we have designed a gesture control system using deep learning techniques through which kiosk machines can be operated in a touch-less way. © 2022 IEEE.

13.
Arab J Sci Eng ; : 1-14, 2022 Apr 22.
Article in English | MEDLINE | ID: covidwho-2236627

ABSTRACT

The rapid spread of the novel coronavirus disease (COVID-19) has disrupted traditional clinical services all over the world. Hospitals and healthcare centers have taken extreme care to minimize the risk of exposure to the virus by restricting patients' visitors and relatives. The dramatic changes in healthcare norms have made it hard for deaf patients to communicate and receive appropriate care. This paper reports work on automatic sign language recognition that can mitigate the communication barrier between deaf patients and healthcare workers in India. Since hand gestures are the most expressive components of a sign language vocabulary, a novel dataset of dynamic hand gestures is proposed for the Indian Sign Language (ISL) words commonly used for emergency communication by deaf COVID-19-positive patients. A hybrid deep convolutional long short-term memory network is used to recognize the proposed hand gestures, achieving an average accuracy of 83.36%. The model's performance was further validated on an alternative ISL dataset as well as a benchmark hand gesture dataset, obtaining average accuracies of 97% and 99.34 ± 0.66%, respectively.
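
A hybrid of per-frame convolutions and an LSTM of the kind described can be sketched in Keras as follows; the input shape, layer sizes, and class count are illustrative placeholders, not the authors' configuration.

```python
# Sketch of a deep convolutional LSTM hybrid: per-frame CNN features are
# pooled and fed to an LSTM over the gesture clip. Sizes are illustrative.
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, NUM_CLASSES = 30, 64, 64, 10

model = models.Sequential([
    layers.Input(shape=(NUM_FRAMES, H, W, 3)),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    layers.LSTM(128),  # temporal modeling across the frame sequence
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```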

14.
4th International Conference on Inventive Research in Computing Applications, ICIRCA 2022 ; : 1266-1271, 2022.
Article in English | Scopus | ID: covidwho-2213278

ABSTRACT

A user interface is the platform through which a human interacts with a machine. Different types of user interfaces have evolved over time in line with the rapid growth of technology; some of the most popular input devices are mice, keyboards, touchscreens, and styluses. A graphical interface is user-friendly and, as a result, widely used. Contactless interaction surfaces have been introduced to decrease the spread of germs and combat diseases like COVID-19, and such systems can also be used by disabled people who retain motor function in the hand and forearm. This paper puts forward an AI-assisted virtual mouse system that addresses these drawbacks by utilizing a webcam or built-in camera to record hand motions and translate them into mouse actions via machine learning algorithms. The mouse actions are performed based on hand gestures, which are used to control the computer virtually, and the hand detection algorithm is based on a deep learning model. The proposed system can thus reduce the spread of germs and make computers more accessible to people with special needs. © 2022 IEEE.

15.
9th International Conference on Wireless Networks and Mobile Communications, WINCOM 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2192125

ABSTRACT

Due to hygiene concerns, COVID-19 has accelerated the use of touchless technology, which has expedited the transition to the Zero User Interface. A Zero UI is a controlled user interface that enables interaction with technology using gestures, voice, eye tracking, and biometrics such as contactless fingerprints and facial recognition. The advancement of touchless interaction with hand gesture interfaces is the main topic of this study; these interfaces are specialized programs that track and predict hand gestures to provide alternative controls and interaction techniques. The hand gesture interface consists of four main layers: the hand gesture interface itself, gesture-to-action mapping, the input simulator, and the graphical user interface. In addition, we employed a new algorithm for hand-type classification. We validated our methodology through trials on a gastronomy application, conducting a small-scale user study with five volunteers to evaluate and test the hand gesture interface. User feedback indicates that the hand gesture interface is simple to use. © 2022 IEEE.
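
The mapping and input-simulator layers can be sketched as a lookup from predicted gesture labels to simulated input events, as below; the gesture names are invented for illustration, and pyautogui merely plays the input-simulator role.

```python
# Sketch of the mapping/input-simulator layers: a predicted gesture label is
# looked up in a table and replayed as a simulated input event.
import pyautogui

GESTURE_ACTIONS = {
    "swipe_left":  lambda: pyautogui.press("left"),        # previous page
    "swipe_right": lambda: pyautogui.press("right"),       # next page
    "pinch":       lambda: pyautogui.hotkey("ctrl", "-"),  # zoom out
    "open_palm":   lambda: pyautogui.press("esc"),         # cancel / back
}

def dispatch(gesture: str) -> None:
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()
```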

16.
Intelligent Systems with Applications ; : 200174, 2023.
Article in English | ScienceDirect | ID: covidwho-2165437

ABSTRACT

In April 2020, with the start of isolation around the world to counter the spread of COVID-19, an increase in violence against women and children was observed, to the point that it has been named the Shadow Pandemic. To fight this phenomenon, a Canadian foundation proposed the "Signal for Help" gesture so that people in danger can discreetly alert others. The gesture soon became known around the world, and even after COVID-19 isolation it has been used in public places by people in danger or being abused. However, the signal only works if people recognize it and know what it means. To address this challenge, we present a workflow for real-time detection of "Signal for Help" based on two lightweight CNN architectures, dedicated to hand palm detection and hand gesture classification, respectively. Moreover, due to the lack of a "Signal for Help" dataset, we created the first video dataset representing the "Signal for Help" hand gesture for detection and classification applications, comprising 200 videos. While the hand-detection task is based on a pre-trained network, the classification network is trained on the publicly available Jester dataset, which includes 27 classes, and fine-tuned with the "Signal for Help" dataset through transfer learning. The proposed platform shows an accuracy of 91.25% with a video processing capability of 16 fps when executed on a machine with an Intel i9-9900K@3.6 GHz CPU, 31.2 GB of memory, and an NVIDIA GeForce RTX 2080 Ti GPU, and it reaches 6 fps when running on the NVIDIA Jetson Nano developer kit as an embedded platform. The high performance and small model size of the proposed approach make it well suited to resource-limited devices and embedded applications, as confirmed by implementing the framework on the Jetson Nano Developer Kit. A comparison with state-of-the-art hand detection and classification models shows a negligible reduction in validation accuracy, around 3%, while the proposed model requires 4 times fewer resources and achieves an inference speedup of about 50% on the Jetson Nano platform, making it highly suitable for embedded systems. The developed platform as well as the created dataset are publicly available.

17.
17th International Conference on Wireless Algorithms, Systems, and Applications, WASA 2022 ; 13472 LNCS:267-278, 2022.
Article in English | Scopus | ID: covidwho-2148603

ABSTRACT

In the current critical situation of the novel coronavirus, contactless gesture recognition can reduce human contact and decrease the probability of virus transmission. In this context, ultrasound-based sensing has attracted wide attention for its slow propagation speed, low sampling rate, and easily accessible devices. However, limited by the complexity of gestural movements and insufficient training data, the accuracy and robustness of gesture recognition remain low. To solve this problem, we propose UltrasonicG, a system for highly robust gesture recognition on ultrasonic devices. The system first converts a single audio signal into a Doppler shift, then extracts feature values using a Residual Neural Network (ResNet34) and uses Bi-directional Long Short-Term Memory (Bi-LSTM) for gesture recognition. The method effectively improves the accuracy of gesture recognition by combining feature-dimension information with the time dimension. To overcome the challenge of an insufficient dataset, we use data augmentation to expand the dataset. We conducted extensive experiments and evaluations of UltrasonicG in a variety of real scenarios. The experimental results show that UltrasonicG can recognize 15 kinds of gestures at a recognition distance of 0.5 m with high accuracy and robustness, reaching a comprehensive recognition rate of 98.8% under different environments and influencing factors. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
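
The Doppler-shift front end can be sketched as a narrow-band STFT around the emitted carrier, producing the 2D feature maps a ResNet34/Bi-LSTM stack could consume; the carrier frequency and window sizes below are plausible guesses, not values from the paper.

```python
# Sketch: turning received ultrasonic audio into a Doppler spectrogram around
# the carrier tone, usable as input features for a CNN/RNN classifier.
import numpy as np
from scipy.signal import stft

FS = 48_000       # audio sampling rate (Hz), assumed
CARRIER = 20_000  # ultrasonic tone emitted by the speaker (Hz), assumed

def doppler_features(audio: np.ndarray, band_hz: float = 500.0) -> np.ndarray:
    freqs, _, Z = stft(audio, fs=FS, nperseg=2048, noverlap=1536)
    keep = np.abs(freqs - CARRIER) <= band_hz  # narrow band around the carrier
    # Log magnitude of the Doppler-shifted band: (freq_bins, time_steps)
    return np.log1p(np.abs(Z[keep, :]))
```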

18.
Healthcare (Basel) ; 10(12)2022 Dec 10.
Article in English | MEDLINE | ID: covidwho-2154954

ABSTRACT

In recent decades, epidemic and pandemic illnesses have grown prevalent and are a regular source of concern throughout the world. The extent to which the globe has been affected by the COVID-19 epidemic is well documented. Smart technology is now widely used in medical applications, and the automated detection of status and feelings has become a significant area of study. As a result, a variety of studies have begun to focus on automatically detecting symptoms in individuals infected with a pandemic or epidemic disease by studying their body language. The recognition and interpretation of arm and leg motions, facial expressions, and body postures is still a developing field, and there is a dearth of comprehensive studies that might aid in illness diagnosis using artificial intelligence techniques and technologies. This literature review is a meta-review of past papers that utilized AI for body language classification through full-body tracking or facial expression detection for various tasks, such as fall detection and COVID-19 detection; it examines the different methods proposed by each paper, their significance, and their results.

19.
Journal of Graphics ; 43(3):504-512, 2022.
Article in Chinese | Scopus | ID: covidwho-2145245

ABSTRACT

Due to the coronavirus pandemic, non-touch personal signatures can reduce the risk of infection to a certain extent, which is of great significance to daily life. Therefore, a simple and efficient spatiotemporal fusion network is proposed to realize skeleton-based dynamic hand gesture recognition, on which a virtual signature system is built. The spatiotemporal fusion network is mainly composed of attention-based spatiotemporal fusion modules, and its key idea is to synchronously extract and fuse spatiotemporal features in an incremental manner. The network adopts different spatiotemporal encoding features as inputs and employs a double sliding window mechanism for post-processing in practical applications, ensuring more stable and robust results. Extensive comparative experiments on two benchmark datasets demonstrate that the proposed method outperforms state-of-the-art single-stream networks. Moreover, the virtual signature system performs well with a single normal RGB camera, which not only greatly reduces the complexity of the interaction system but also provides a more convenient and secure approach to personal signatures. © 2022, Editorial Board of Journal of Graphics. All rights reserved.
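
The paper does not detail its double sliding window mechanism; one plausible reading, sketched below, is a per-frame majority vote (first window) that must stay stable across consecutive votes (second window) before a label is emitted.

```python
# Sketch: an assumed interpretation of "double sliding window" post-processing
# for stabilizing per-frame gesture predictions. Window lengths are guesses.
from collections import Counter, deque

class DoubleSlidingWindow:
    def __init__(self, vote_len: int = 15, stable_len: int = 5):
        self.frames = deque(maxlen=vote_len)   # window 1: raw predictions
        self.votes = deque(maxlen=stable_len)  # window 2: majority winners

    def update(self, frame_label: str):
        self.frames.append(frame_label)
        winner, _ = Counter(self.frames).most_common(1)[0]
        self.votes.append(winner)
        # Emit only when the same label wins every vote in window 2.
        if len(self.votes) == self.votes.maxlen and len(set(self.votes)) == 1:
            return winner
        return None
```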

20.
2022 IEEE International Conference on Consumer Electronics - Taiwan, ICCE-Taiwan 2022 ; : 23-24, 2022.
Article in English | Scopus | ID: covidwho-2051990

ABSTRACT

Hand hygiene has become even more important in light of the COVID-19 pandemic, in which hands are one of the high-risk transmission routes. Existing hand-hygiene education focuses on one-time training and does not ensure that correct handwashing procedures are followed. Our study therefore proposes a hand-hygiene education and facilitation system. Unlike previous systems, it uses an external RGB camera with our proposed image preprocessing and applies 3D convolution and convolutional long short-term memory (ConvLSTM) models to detect the correctness of handwashing postures, and it guides children to wash their hands properly through an on-screen tutorial. It also encourages children to develop good handwashing habits through positive competition and a reward system, and helps teachers understand children's learning progress. The experimental results showed that the model was able to identify handwashing postures in real time with 95.12% accuracy in a realistic and variable environment. © 2022 IEEE.
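
A ConvLSTM-based clip classifier of the kind the paper names can be sketched in Keras as below; the clip length, resolution, and number of handwashing postures are illustrative placeholders.

```python
# Sketch of a ConvLSTM classifier for handwashing-step clips. Sizes are
# illustrative; NUM_STEPS assumes a WHO-style set of handwashing postures.
from tensorflow.keras import layers, models

FRAMES, H, W, NUM_STEPS = 16, 64, 64, 7

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, 3)),
    layers.ConvLSTM2D(32, kernel_size=3, return_sequences=False,
                      activation="tanh"),  # spatiotemporal features in one layer
    layers.BatchNormalization(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_STEPS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```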
